xai algorithm
The State of Post-Hoc Local XAI Techniques for Image Processing: Challenges and Motivations
Poh, Rech Leong Tian, Keoh, Sye Loong, Li, Liying
As complex AI systems increasingly prove to be an integral part of our lives, a persistent and critical problem is the underlying black-box nature of such products and systems. In the pursuit of productivity enhancements, one must not forget the need for technologies that boost the overall trustworthiness of such AI systems. One example, studied extensively in this work, is the domain of Explainable Artificial Intelligence (XAI). Research in this scope is centred around the objective of making AI systems more transparent and interpretable, to further boost reliability and trust in using them. In this work, we discuss the various motivations for XAI and its approaches, the underlying challenges that XAI faces, and some open problems that we believe deserve further effort. We also provide a brief discussion of various XAI approaches for image processing, and finally discuss some future directions, in the hope of motivating positive development of the XAI research space.
- Europe > United Kingdom > Scotland > City of Glasgow > Glasgow (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- Asia > Singapore (0.04)
- (8 more...)
- Research Report (1.00)
- Overview (1.00)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine (1.00)
- Government (1.00)
- Information Technology > Sensing and Signal Processing > Image Processing (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Explanation & Argumentation (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (1.00)
XAI-FUNGI: Dataset resulting from the user study on comprehensibility of explainable AI algorithms
Bobek, Szymon, Korycińska, Paloma, Krakowska, Monika, Mozolewski, Maciej, Rak, Dorota, Zych, Magdalena, Wójcik, Magdalena, Nalepa, Grzegorz J.
With the rapid development of black-box machine learning (ML) models, such as deep neural networks or gradient boosting trees, the need for explanations of their decisions has emerged. This demand has been driven by the increasing deployment of opaque models in high-risk and critical areas like medicine, healthcare, industry, and law, which laid the foundation for modern research on explainable and interpretable artificial intelligence (XAI). Scientists' efforts in designing XAI algorithms have been further supported by political initiatives such as DARPA's XAI challenge [1], the European Union's GDPR [2], and, more recently, the EU AI Act [3]. The shared goal of all these initiatives is to improve the transparency of AI systems, thereby promoting their adoption in areas where trust in AI is not fully established or where the transparency of decisions is crucial for legal and safety reasons. However, as XAI algorithms have advanced, a new discussion has been initiated, addressing the fundamental challenge of ensuring that the explanations generated by these algorithms are comprehensible to humans. This triggered research on the evaluation of XAI [4], drawing attention from the social sciences, which argued that much of the effort in XAI relies solely on researchers' intuition about what constitutes a good explanation. They emphasized that human factors should be integral to the design and evaluation of XAI to ensure its reliability [5]. Recognizing individual human abilities to comprehend algorithmically generated explanations is crucial, as these abilities can vary significantly based on personal information competencies. Additionally, there is a lack of established multidisciplinary methods for measuring these capabilities, as well as datasets that facilitate reproducible evaluations or comprehensive analyses.
- North America > United States (0.54)
- Europe > Poland > Lesser Poland Province > Kraków (0.04)
- Asia > Singapore (0.04)
- (2 more...)
- Questionnaire & Opinion Survey (1.00)
- Research Report > Experimental Study (0.93)
User-centric evaluation of explainability of AI with and for humans: a comprehensive empirical study
Bobek, Szymon, Korycińska, Paloma, Krakowska, Monika, Mozolewski, Maciej, Rak, Dorota, Zych, Magdalena, Wójcik, Magdalena, Nalepa, Grzegorz J.
This study is situated in the field of Human-Centered Artificial Intelligence (HCAI) and focuses on the results of a user-centered assessment of commonly used eXplainable Artificial Intelligence (XAI) algorithms, specifically investigating how humans understand and interact with the explanations provided by these algorithms. To achieve this, we employed a multi-disciplinary approach that included state-of-the-art research methods from the social sciences to measure the comprehensibility of explanations generated by a state-of-the-art machine learning model, specifically the Gradient Boosting Classifier (XGBClassifier). We conducted an extensive empirical user study involving interviews with 39 participants from three different groups, each with varying expertise in data science, data visualization, and domain-specific knowledge related to the dataset used for training the machine learning model. Participants were asked a series of questions to assess their understanding of the model's explanations. To ensure replicability, we built the model using a publicly available dataset from the UC Irvine Machine Learning Repository, focusing on edible and non-edible mushrooms. Our findings reveal limitations in existing XAI methods and confirm the need for new design principles and evaluation techniques that address the specific information needs and user perspectives of different classes of AI stakeholders. We believe that the results of our research and the cross-disciplinary methodology we developed can be successfully adapted to various data types and user profiles, thus promoting dialogue and addressing opportunities in HCAI research. To support this, we are making the data resulting from our study publicly available.
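For context, a minimal sketch of the kind of pipeline the abstract describes, assuming the UCI mushroom data is mirrored on OpenML under the name "mushroom" and using SHAP as one representative explanation method (the paper does not specify its exact explainer):

```python
# Sketch only: XGBClassifier on the UCI mushroom data plus SHAP attributions.
# Assumptions: the dataset is available via fetch_openml("mushroom", version=1)
# and its target labels are "p" (poisonous) / "e" (edible).
import pandas as pd
import shap
from sklearn.datasets import fetch_openml
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

mushrooms = fetch_openml("mushroom", version=1, as_frame=True)
X = pd.get_dummies(mushrooms.data).astype(int)   # all features are categorical
y = (mushrooms.target == "p").astype(int)        # 1 = poisonous, 0 = edible

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = XGBClassifier(n_estimators=100, max_depth=4, eval_metric="logloss")
model.fit(X_train, y_train)

# Local explanations: one attribution per feature per prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test.iloc[:5])
print(shap_values.shape)  # (5, n_features)
```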
- Europe > Austria > Vienna (0.14)
- Europe > Poland > Lesser Poland Province > Kraków (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- (4 more...)
- Questionnaire & Opinion Survey (1.00)
- Research Report > New Finding (0.66)
- Research Report > Experimental Study (0.46)
- Health & Medicine (0.93)
- Government (0.68)
- Education (0.67)
Aligning XAI with EU Regulations for Smart Biomedical Devices: A Methodology for Compliance Analysis
Sovrano, Francesco, Lognoul, Michael, Vilone, Giulia
Significant investment and development have gone into integrating Artificial Intelligence (AI) in medical and healthcare applications, leading to advanced control systems in medical technology. However, the opacity of AI systems raises concerns about essential characteristics needed in such sensitive applications, like transparency and trustworthiness. Our study addresses these concerns by investigating a process for selecting the most adequate Explainable AI (XAI) methods to comply with the explanation requirements of key EU regulations in the context of smart bioelectronics for medical devices. The adopted methodology starts with categorising smart devices by their control mechanisms (open-loop, closed-loop, and semi-closed-loop systems) and delving into their technology. Then, we analyse these regulations to define their explainability requirements for the various devices and related goals. Simultaneously, we classify XAI methods by their explanatory objectives. This allows for matching legal explainability requirements with XAI explanatory goals and determining the suitable XAI algorithms for achieving them. Our findings provide a nuanced understanding of which XAI algorithms align better with EU regulations for different types of medical devices. We demonstrate this through practical case studies on different neural implants, from chronic disease management to advanced prosthetics. This study fills a crucial gap in aligning XAI applications in bioelectronics with stringent provisions of EU regulations. It provides a practical framework for developers and researchers, ensuring their AI innovations advance healthcare technology and adhere to legal and ethical standards.
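A minimal, purely illustrative sketch of the requirement-to-goal matching idea; the device categories, requirement labels, and method goals below are hypothetical placeholders, not the paper's taxonomy:

```python
# Hypothetical labels throughout; the paper derives its own requirements from EU law.
REGULATORY_REQUIREMENTS = {
    # device control mechanism -> explanation requirements attached to it
    "open-loop": {"global_transparency"},
    "semi-closed-loop": {"global_transparency", "local_justification"},
    "closed-loop": {"global_transparency", "local_justification", "failure_tracing"},
}

XAI_METHOD_GOALS = {
    # XAI method -> explanatory goals it is designed to serve
    "surrogate decision tree": {"global_transparency"},
    "SHAP": {"local_justification"},
    "counterfactual explanation": {"local_justification", "failure_tracing"},
}

def suitable_methods(device_type: str) -> dict:
    """Return, per requirement of the device type, the methods whose goals cover it."""
    required = REGULATORY_REQUIREMENTS[device_type]
    return {
        req: [m for m, goals in XAI_METHOD_GOALS.items() if req in goals]
        for req in required
    }

print(suitable_methods("closed-loop"))
```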
- North America > United States > Massachusetts > Middlesex County > Reading (0.04)
- Europe > Switzerland > Zürich > Zürich (0.04)
- Europe > Belgium > Wallonia > Namur Province > Namur (0.04)
- Africa > Mozambique > Gaza Province > Xai-Xai (0.04)
- Health & Medicine > Therapeutic Area > Neurology (1.00)
- Health & Medicine > Therapeutic Area > Endocrinology > Diabetes (1.00)
- Health & Medicine > Health Care Technology (1.00)
- Health & Medicine > Therapeutic Area > Cardiology/Vascular Diseases (0.67)
Evaluating Explainable AI on a Multi-Modal Medical Imaging Task: Can Existing Algorithms Fulfill Clinical Requirements?
Jin, Weina, Li, Xiaoxiao, Hamarneh, Ghassan
Being able to explain the prediction to clinical end-users is a necessity to leverage the power of artificial intelligence (AI) models for clinical decision support. For medical images, a feature attribution map, or heatmap, is the most common form of explanation that highlights important features for AI models' prediction. However, it is unknown how well heatmaps perform on explaining decisions on multi-modal medical images, where each image modality or channel visualizes distinct clinical information of the same underlying biomedical phenomenon. Understanding such modality-dependent features is essential for clinical users' interpretation of AI decisions. To tackle this clinically important but technically ignored problem, we propose the modality-specific feature importance (MSFI) metric. It encodes clinical image and explanation interpretation patterns of modality prioritization and modality-specific feature localization. We conduct a clinical requirement-grounded, systematic evaluation using computational methods and a clinician user study. Results show that the examined 16 heatmap algorithms failed to fulfill clinical requirements to correctly indicate AI model decision process or decision quality. The evaluation and MSFI metric can guide the design and selection of XAI algorithms to meet clinical requirements on multi-modal explanation.
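A minimal sketch of one plausible way to score how much of a heatmap's attribution falls inside modality-specific regions; this is an assumption-laden simplification, not the paper's exact MSFI formulation:

```python
# Sketch only: per-modality localization score for a multi-modal heatmap.
# Shapes and data are illustrative, not taken from the paper.
import numpy as np

def modality_localization_scores(heatmap, masks):
    """heatmap: (M, H, W) attribution per modality; masks: (M, H, W) binary masks
    marking the clinician-annotated, modality-specific feature in each modality."""
    pos = np.clip(heatmap, 0, None)                    # keep positive attributions only
    total = pos.reshape(len(pos), -1).sum(axis=1)
    inside = (pos * masks).reshape(len(pos), -1).sum(axis=1)
    return inside / np.maximum(total, 1e-12)           # fraction in [0, 1] per modality

# Toy example: two modalities of an 8x8 image, feature only visible in modality 0.
rng = np.random.default_rng(0)
heatmap = rng.random((2, 8, 8))
masks = np.zeros((2, 8, 8)); masks[0, 2:5, 2:5] = 1
print(modality_localization_scores(heatmap, masks))
```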
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > District of Columbia > Washington (0.04)
- North America > United States > California > Los Angeles County > Santa Monica (0.04)
- (2 more...)
- Questionnaire & Opinion Survey (1.00)
- Research Report > New Finding (0.48)
- Research Report > Experimental Study (0.46)
- Information Technology > Sensing and Signal Processing > Image Processing (1.00)
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (0.83)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.46)
A Novel Explainable Artificial Intelligence Model in Image Classification problem
Cao, Quoc Hung, Nguyen, Truong Thanh Hung, Nguyen, Vo Thanh Khang, Nguyen, Xuan Phong
In recent years, artificial intelligence has increasingly been applied in many different fields and has a profound and direct impact on human life. Following from this is the need to understand the principles by which a model makes its predictions. Since most current high-precision models are black boxes, neither the AI scientist nor the end-user deeply understands what is going on inside these models. Therefore, many algorithms have been studied for the purpose of explaining AI models, especially for the image classification problem in computer vision, such as LIME, CAM, and GradCAM. However, these algorithms still have limitations, such as LIME's long execution time and the limited concreteness and clarity of CAM's interpretations. Therefore, in this paper, we propose a new method called Segmentation - Class Activation Mapping (SeCAM) that combines the advantages of the algorithms above while overcoming their disadvantages. We tested this algorithm with various models, including ResNet50, Inception-v3, and VGG16, on the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) dataset. The results are outstanding: the algorithm meets all the requirements for a specific explanation in a remarkably short time.
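A minimal sketch of the general idea the method's name suggests (averaging a CAM heatmap over superpixel segments), assuming scikit-image's SLIC for segmentation and a synthetic heatmap in place of a real model's output; this is not the authors' exact SeCAM algorithm:

```python
# Sketch only: segmentation-averaged class activation map.
# The image and CAM below are random stand-ins for a real CNN's input and heatmap.
import numpy as np
from skimage.segmentation import slic

rng = np.random.default_rng(0)
image = rng.random((64, 64, 3))            # stand-in for an input image
cam = rng.random((64, 64))                 # stand-in for an upsampled CAM/Grad-CAM map

segments = slic(image, n_segments=50, compactness=10, start_label=0)

# Replace each pixel's score with the mean CAM value of its superpixel,
# so the explanation follows object-like regions instead of raw pixels.
secam_like = np.zeros_like(cam)
for seg_id in np.unique(segments):
    region = segments == seg_id
    secam_like[region] = cam[region].mean()

print(secam_like.shape, secam_like.min(), secam_like.max())
```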
- Asia > Vietnam > Bình Định Province (0.04)
- Asia > Japan > Honshū > Kantō > Tokyo Metropolis Prefecture > Tokyo (0.04)
- Europe > Germany > Hesse > Darmstadt Region > Frankfurt (0.04)
- Information Technology > Sensing and Signal Processing > Image Processing (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (1.00)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (0.87)
- Information Technology > Artificial Intelligence > Vision > Image Understanding (0.72)
The XAI Alignment Problem: Rethinking How Should We Evaluate Human-Centered AI Explainability Techniques
Jin, Weina, Li, Xiaoxiao, Hamarneh, Ghassan
Setting proper evaluation objectives for explainable artificial intelligence (XAI) is vital for making XAI algorithms follow human communication norms, support human reasoning processes, and fulfill human needs for AI explanations. In this position paper, we examine the most pervasive human-grounded concept in XAI evaluation, explanation plausibility. Plausibility measures how reasonable the machine explanation is compared to the human explanation. Plausibility has been conventionally formulated as an important evaluation objective for AI explainability tasks. We argue against this idea, and show how optimizing and evaluating XAI for plausibility is sometimes harmful, and always ineffective in achieving model understandability, transparency, and trustworthiness. Specifically, evaluating XAI algorithms for plausibility regularizes the machine explanation to express exactly the same content as human explanation, which deviates from the fundamental motivation for humans to explain: expressing similar or alternative reasoning trajectories while conforming to understandable forms or language. Optimizing XAI for plausibility regardless of the model decision correctness also jeopardizes model trustworthiness, because doing so breaks an important assumption in human-human explanation that plausible explanations typically imply correct decisions, and vice versa; and violating this assumption eventually leads to either undertrust or overtrust of AI models. Instead of being the end goal in XAI evaluation, plausibility can serve as an intermediate computational proxy for the human process of interpreting explanations to optimize the utility of XAI.
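A minimal sketch of one common way plausibility is operationalised for saliency maps (intersection-over-union against a human annotation mask); the paper's own definition may differ:

```python
# Sketch only: plausibility as overlap between machine and human explanations.
import numpy as np

def plausibility_iou(saliency, human_mask, threshold=0.5):
    """IoU between the binarised machine saliency map and the human annotation mask;
    both arrays share the same spatial shape."""
    machine_mask = saliency >= threshold
    human_mask = human_mask.astype(bool)
    intersection = np.logical_and(machine_mask, human_mask).sum()
    union = np.logical_or(machine_mask, human_mask).sum()
    return intersection / union if union else 0.0

# Toy example: the machine highlights a region only partially matching the annotation.
saliency = np.zeros((8, 8)); saliency[2:6, 2:6] = 1.0
human = np.zeros((8, 8));    human[4:8, 4:8] = 1.0
print(plausibility_iou(saliency, human))  # 4 / 28 ≈ 0.14
```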
- Europe (0.14)
- North America > United States > New York > New York County > New York City (0.04)
- South America > Peru > Lima Department > Lima Province > Lima (0.04)
- (2 more...)
- Health & Medicine (1.00)
- Law (0.93)
- Information Technology > Security & Privacy (0.46)
Hardware Acceleration of Explainable Artificial Intelligence
Machine learning (ML) is successful in achieving human-level artificial intelligence in various fields. However, it lacks the ability to explain an outcome due to its black-box nature. While recent efforts on explainable AI (XAI) have received significant attention, most existing solutions are not applicable in real-time systems since they formulate interpretability as an optimization problem, which leads to numerous iterations of time-consuming, complex computations. Although hardware-based acceleration frameworks for XAI exist, they are implemented on FPGAs and designed for specific tasks, leading to high cost and a lack of flexibility. In this paper, we propose a simple yet efficient framework to accelerate various XAI algorithms with existing hardware accelerators. Specifically, this paper makes three important contributions. (1) The proposed method is the first attempt at exploring the effectiveness of the Tensor Processing Unit (TPU) in accelerating XAI. (2) Our proposed solution explores the close relationship between several existing XAI algorithms and matrix computations, and exploits the synergy between convolution and the Fourier transform, taking full advantage of the TPU's inherent ability to accelerate matrix computations. (3) Our proposed approach can lead to real-time outcome interpretation. Extensive experimental evaluation demonstrates that the proposed approach deployed on a TPU provides a drastic improvement in interpretation time (39x on average) as well as energy efficiency (69x on average) compared to existing acceleration techniques.
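A minimal sketch of the convolution-Fourier synergy the abstract alludes to, shown with NumPy rather than a TPU; this illustrates the convolution theorem only, not the proposed acceleration framework:

```python
# Sketch only: a linear convolution computed as an element-wise product in the
# frequency domain, turning it into dense array math that accelerators handle well.
import numpy as np

rng = np.random.default_rng(0)
signal = rng.random(128)
kernel = rng.random(16)

# Direct linear convolution.
direct = np.convolve(signal, kernel)

# Same result via the convolution theorem: zero-pad, FFT, multiply, inverse FFT.
n = len(signal) + len(kernel) - 1
via_fft = np.fft.irfft(np.fft.rfft(signal, n) * np.fft.rfft(kernel, n), n)

print(np.allclose(direct, via_fft))  # True
```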
- North America > United States > Florida > Alachua County > Gainesville (0.14)
- North America > United States > California (0.04)
- Asia > China > Hubei Province > Wuhan (0.04)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Explanation & Argumentation (0.86)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.68)
Invisible Users: Uncovering End-Users' Requirements for Explainable AI via Explanation Forms and Goals
Jin, Weina, Fan, Jianyu, Gromala, Diane, Pasquier, Philippe, Hamarneh, Ghassan
Non-technical end-users are silent and invisible users of the state-of-the-art explainable artificial intelligence (XAI) technologies. Their demands and requirements for AI explainability are not incorporated into the design and evaluation of XAI techniques, which are developed to explain the rationales of AI decisions to end-users and assist their critical decisions. This makes XAI techniques ineffective or even harmful in high-stakes applications, such as healthcare, criminal justice, finance, and autonomous driving systems. To systematically understand end-users' requirements to support the technical development of XAI, we conducted the EUCA user study with 32 layperson participants in four AI-assisted critical tasks. The study identified comprehensive user requirements for feature-, example-, and rule-based XAI techniques (manifested by the end-user-friendly explanation forms) and XAI evaluation objectives (manifested by the explanation goals), which were shown to be helpful to directly inspire the proposal of new XAI algorithms and evaluation metrics. The EUCA study findings, the identified explanation forms and goals for technical specification, and the EUCA study dataset support the design and evaluation of end-user-centered XAI techniques for accessible, safe, and accountable AI.
- North America > United States > New York > New York County > New York City (0.14)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- North America > United States > District of Columbia > Washington (0.04)
- (2 more...)
- Research Report > Experimental Study (0.93)
- Questionnaire & Opinion Survey (0.90)
- Information Technology > Security & Privacy (0.93)
- Law (0.87)
- Health & Medicine > Therapeutic Area > Endocrinology (0.67)
- Health & Medicine > Diagnostic Medicine > Imaging (0.46)
Transcending XAI Algorithm Boundaries through End-User-Inspired Design
Jin, Weina, Fan, Jianyu, Gromala, Diane, Pasquier, Philippe, Li, Xiaoxiao, Hamarneh, Ghassan
The boundaries of existing explainable artificial intelligence (XAI) algorithms are confined to problems grounded in technical users' demand for explainability. This research paradigm disproportionately ignores the larger group of non-technical end users, who have a much higher demand for AI explanations across diverse explanation goals, such as making safer and better decisions and improving users' predicted outcomes. A lack of explainability-focused functional support for end users may hinder the safe and accountable use of AI in high-stakes domains, such as healthcare, criminal justice, finance, and autonomous driving systems. Building upon prior human-factors analysis of end users' requirements for XAI, we identify and model four novel XAI technical problems covering the full spectrum from the design to the evaluation of XAI algorithms: edge-case-based reasoning, customizable counterfactual explanation, collapsible decision tree, and the verifiability metric to evaluate XAI utility. Based on these newly identified research problems, we also discuss open problems in the technical development of user-centered XAI to inspire future research. Our work bridges human-centered XAI with the technical XAI community, and calls for a new research paradigm on the technical development of user-centered XAI for the responsible use of AI in critical tasks.
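A minimal, generic sketch of one of the listed problems, customizable counterfactual explanation, in which the user restricts which features may change; the formulation below is hypothetical and not the paper's:

```python
# Sketch only: greedily perturb user-approved ("actionable") features of a linear
# classifier's input until the predicted class flips. Toy model and data throughout.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)
model = LogisticRegression().fit(X, y)

def counterfactual(x, model, actionable, step=0.1, max_iter=200):
    """Nudge only the user-approved feature indices toward the opposite class."""
    x_cf = x.copy()
    target = 1 - model.predict(x.reshape(1, -1))[0]
    # For a linear model, moving along the sign of the coefficients raises the score.
    direction = np.sign(model.coef_[0]) * (1 if target == 1 else -1)
    for _ in range(max_iter):
        if model.predict(x_cf.reshape(1, -1))[0] == target:
            return x_cf
        for i in actionable:
            x_cf[i] += step * direction[i]
    return None  # no counterfactual found under the user's constraints

x = X[0].copy()
print("original prediction:", model.predict(x.reshape(1, -1))[0])
print("counterfactual:", counterfactual(x, model, actionable=[0, 2]))
```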
- North America > United States > New York > New York County > New York City (0.14)
- North America > United States > California > San Diego County > San Diego (0.04)
- North America > Canada > British Columbia (0.04)
- (4 more...)
- Health & Medicine > Consumer Health (0.48)
- Health & Medicine > Diagnostic Medicine (0.46)
- Health & Medicine > Therapeutic Area (0.46)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Explanation & Argumentation (1.00)
- Information Technology > Artificial Intelligence > Machine Learning (1.00)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (1.00)